How Do We Regulate a Technology That Can Both Put an End to Mass Shootings and Make Them More Deadly?


As the technology behind artificial intelligence continues to advance and expand into more markets and applications, the ethical ramifications of this scientific breakthrough are becoming more complicated. Recently, The New York Times released a short documentary titled “A.I. Is Making It Easier to Kill (You). Here’s How.” Once you get past the alarmist title, the video raises serious dilemmas about a technology that major powers are weaponizing to gain an edge in modern war. Killer robots and autonomous drones are already being used around the world, increasing the efficiency of the army that possesses them while dramatically magnifying the death toll of the enemy. This documentary got me thinking: if we are already using these technologies in our militaries, how far away are we from civilian use?

Dr. Gordon Cooke, Director of Research and Strategy at the United States Military Academy at West Point, had a similar thought. In an article published on the US Army website, he outlined the pros and cons of automated weaponry on the battlefield and the risk of it falling into civilian hands. “A variety of instructions, how-to videos and even off-the-shelf, trained A.I. software is readily available online that can be easily adapted to available weapons. Automated gun turrets used by hobbyists for paintball and airsoft guns have demonstrated the ability to hit more than 70 percent of moving targets,” he explained. For context, he added that “the Army rifle qualification course only requires a Soldier to hit 58 percent of stationary targets to qualify as a marksman on their weapon” and that “soldiers who hit 75 percent of stationary targets receive a sharpshooter qualification.” In other words, a hobbyist’s automated gun turret has the potential to be more accurate, and therefore deadlier, than a trained marksman or even a sharpshooter in the US military.

It isn’t just about the accuracy of the guns, either. A.I. and robotics can also assist soldiers with resupply and transportation, anticipating their needs before the soldiers do themselves. Dr. Cooke explained that artificial intelligence can reroute supply trucks to the battlefield based on traffic conditions and need, and that small robots or drones can then deliver ammunition directly to the troops. Now, imagine this technology being used instead in a mass shooting. A shooter could use artificial intelligence to deliver fresh supplies before running out of bullets and having to reload, the critical moment in which bystanders often attempt to intervene and take down the shooter. A shooter would not even need to be present at the site of the attack, instead sending a drone equipped with an automated weapon, a camera, and A.I. while monitoring from a safe, undisclosed location. Not only can shooters use A.I. to make their sprees more deadly while protecting their own lives and identities, they can also program the A.I. with specific objectives and parameters for exactly whom they want the drones to target, making hate crimes easier to carry out.
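
To make the logistics idea concrete, here is a minimal sketch of how traffic-aware rerouting could work in principle: a supply route is recomputed over a road graph whose travel times are scaled up wherever congestion is reported. The road network, congestion values, and function names are invented for illustration; they are not details from Dr. Cooke’s article or any Army system.

```python
import heapq

# Hypothetical road graph: node -> list of (neighbor, base_travel_minutes).
ROADS = {
    "depot":      [("junction_a", 10), ("junction_b", 25)],
    "junction_a": [("front_line", 30), ("junction_b", 8)],
    "junction_b": [("front_line", 15)],
    "front_line": [],
}

# Hypothetical congestion reports: edge -> delay multiplier (1.0 = clear road).
CONGESTION = {("junction_a", "front_line"): 3.0}

def reroute(start, goal):
    """Return the fastest route, with edge costs scaled by reported congestion."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in ROADS[node]:
            penalty = CONGESTION.get((node, nxt), 1.0)
            heapq.heappush(queue, (cost + minutes * penalty, nxt, path + [nxt]))
    return None

# Avoids the congested junction_a -> front_line leg and cuts over via junction_b.
print(reroute("depot", "front_line"))
```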

We’ve established that artificial intelligence in the wrong hands has the potential to make mass shootings far more deadly, but in the right hands, it has been argued, artificial intelligence can help stop mass shootings before they happen and save lives. One of the most recent trends in security is A.I.-equipped surveillance cameras trained to identify suspicious people, behavior, and objects. They collect and analyze video, then pass flagged footage along to humans for review. One of the biggest markets for A.I.-equipped surveillance, also known as intelligent video or real-time video analytics, is schools. In 2018, schools were estimated to be the largest market for video surveillance in the country, at over $450 million. Several private companies, including ZeroEyes and Athena Security, have teamed up with school districts across the country. However, both companies “claim their systems can detect weapons with more than 90 percent accuracy but acknowledge their products haven’t been tested in a real-life scenario.” Besides the missing track record, another weakness is the systems’ inability to detect concealed weapons, which might be the key to stopping a shooting before the crucial moment when a shooter draws a weapon and begins firing. Still, supporters contend that at a minimum these systems provide law enforcement with more information and give people more time to seek shelter.
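
As a rough illustration, real-time video analytics pipelines of this kind are typically structured as a loop: each frame is scored by a detector, every detection is queued for a human reviewer, and high-confidence hits trigger an alert. The sketch below is a toy version under those assumptions; the stub detector, threshold, and function names are placeholders, not details of ZeroEyes’ or Athena Security’s actual products.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "handgun", "rifle"
    confidence: float  # 0.0 - 1.0
    frame_id: int

def detect_weapons(frame_id):
    """Stub for an object-detection model; a real system would run a
    trained neural network over the camera frame here."""
    return [Detection(label="handgun", confidence=0.93, frame_id=frame_id)]

def review_queue(detection):
    # Placeholder: flagged frames go to a human operator for confirmation.
    print(f"Queued frame {detection.frame_id} for human review")

def alert_authorities(detection):
    # Placeholder: a deployed system would notify police or school security.
    print(f"ALERT: {detection.label} detected "
          f"(frame {detection.frame_id}, confidence {detection.confidence:.0%})")

CONFIDENCE_THRESHOLD = 0.90  # assumed value; tuned per deployment

def process_frame(frame_id):
    for det in detect_weapons(frame_id):
        review_queue(det)                        # humans always see the evidence
        if det.confidence >= CONFIDENCE_THRESHOLD:
            alert_authorities(det)               # high-confidence hits alert immediately

process_frame(frame_id=101)
```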

Another institution that recently turned to Athena Security’s A.I.-equipped surveillance cameras is the Al Noor Mosque in Christchurch, New Zealand. This was the site of the country’s deadliest mass shooting in March of this year, in which a gunman specifically targeting Muslims charged into the mosque during worship, killing 51 people and injuring 49 more. After the attack, Athena Security announced it would install one of its systems to protect worshippers in the future. Each camera costs $100 a month and uses A.I. to detect guns. When a gun is detected, the system immediately alerts the authorities and attempts to deter the shooter by warning them that the authorities are on the way. So far, the system appears to be working well: it passed a recent firearms test and identified every kind of gun brandished before it. The remaining shortcoming Athena Security is working to fix is teaching the A.I. to identify smaller weapons, such as knives, and violent actions like fighting.

It isn’t just schools and religious institutions that are utilizing real-time video analytics; police forces across the United States use them as well. Cities such as Atlanta, New Orleans, and New York all use cameras equipped with A.I. Hartford, Connecticut’s police force has an extensive network of over 500 cameras and several A.I.-equipped units that it uses to search hours of video for people or vehicles. Although these tools help solve crimes, critics argue that they violate people’s privacy and have the potential to be racially biased. A study conducted at the University of Maryland in 2018 found racial bias in artificial intelligence: some A.I. interpreted black faces as appearing angrier than white faces. Humans (often white and/or male) are the creators and programmers of artificial intelligence, and humans are imperfect and biased. It follows that artificial intelligence can inherit these human biases, which can result in systems unfairly targeting minorities.
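
The kind of disparity that study describes can be made concrete with a simple measurement: compare how often a model assigns a label (here, “angry”) to faces from different demographic groups. The logged predictions and numbers below are invented for illustration; the point is only that this sort of check is easy to compute once predictions are recorded alongside group labels.

```python
from collections import defaultdict

# Hypothetical logged predictions: (demographic_group, predicted_emotion).
predictions = [
    ("black", "angry"), ("black", "neutral"), ("black", "angry"), ("black", "happy"),
    ("white", "neutral"), ("white", "happy"), ("white", "angry"), ("white", "neutral"),
]

def angry_rate_by_group(preds):
    """Fraction of faces in each group that the model labeled 'angry'."""
    counts = defaultdict(lambda: [0, 0])  # group -> [angry_count, total]
    for group, emotion in preds:
        counts[group][1] += 1
        if emotion == "angry":
            counts[group][0] += 1
    return {group: angry / total for group, (angry, total) in counts.items()}

rates = angry_rate_by_group(predictions)
print(rates)                                           # {'black': 0.5, 'white': 0.25}
print("disparity:", rates["black"] - rates["white"])   # a 0.25 gap suggests bias
```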

With artificial intelligence being used on both sides of a mass shooting, by law enforcement trying to stop an attack before it happens and potentially by the shooter increasing the capabilities of their weapon while protecting themselves, how can lawmakers effectively regulate this technology while still allowing innovation to improve its intended uses? The New York Times documentary covered how lawmakers are lagging behind the fast pace of technological innovation, focusing on the international level. Footage from several United Nations sessions makes clear that strong military states such as the United States, Israel, South Korea, and Russia are staunchly opposed to regulating automated weapons, having already invested millions of dollars in these technologies to enhance the effectiveness of their armed forces. Meanwhile, developing countries that cannot afford such technologies worry that their people are the ones on the other end of these weapons, as potential targets or civilian casualties.

The Campaign to Stop Killer Robots, a coalition of activists, non-profits, and civil society organizations, has petitioned for a ban on autonomous weapons, a call supported by 30 countries (nearly all developing), 100 NGOs, the European Parliament, 21 Nobel Peace Prize laureates, the U.N. Secretary-General, and over 4,500 A.I. researchers. However, the U.N. aims to pass such resolutions by consensus, and it has failed to make progress on this issue because of the disagreement between developing and developed countries over the use of automated weaponry in militaries. In the meantime, technology continues to outpace the slow diplomacy of the United Nations, which is stuck at a stalemate over the definition of what counts as an autonomous weapon. Stuart Russell, vice chair of the World Economic Forum’s council on A.I. and robotics and an adviser to the U.N. on arms control, explains the conundrum: “It could easily take another 10 years before they [the U.N.] decide on a definition for what an autonomous weapon is. And by that time it could be too late. I think for some countries that’s the point.”

While the US military continues investing in autonomous weapons and killer robot technology, President Trump has called on the tech community to use analytics and detection software to “stop mass murders before they start,” arguing that almost every mass shooting had warning signs that A.I. could recognize. Companies are certainly already working on these technologies, including Palantir, the data-analytics firm co-founded by Peter Thiel, the PayPal co-founder and fervent Trump supporter. Reports have shown that Palantir is using online tracking “similar to a Minority Report-like ‘pre-cog’ or ‘pre-crime’ capability.” In 2017, a study found that A.I. could use Instagram posts to recognize predictive markers of depression, outperforming human practitioners; it even correctly flagged images posted before patients were diagnosed. Although this progress is impressive, identifying a mass shooter from social media images or facial expressions alone could prove much more difficult. Not every mass shooter is mentally ill, and even those who are may not suffer from depression. As Arthur C. Evans, CEO of the American Psychological Association, explains: “There is no single personality profile that can reliably predict who will resort to gun violence. Based on the research, we know only that a history of violence is the single best predictor of who will commit future violence.” Evans goes on to say in his press release that the solution to mass shootings is not more advanced technology for predicting these events, but limiting access to deadly weapons in the first place. He is not alone in that thought; many experts concur that stricter gun laws would reduce the frequency of mass shootings. The public seems to think so too, with about 60% of Americans in favor of stricter gun laws, according to a 2019 Gallup poll.

With all the support from experts and the American public for a change in policies rather than for experimental technologies, why are more than a hundred Americans still being killed by guns every day? One answer lies with the three major players in this fight: tech companies, the NRA, and our government. None of the three accepts responsibility for its part in this crisis or works to change the reality; instead, each blames the others. Tech companies like Facebook have emboldened white supremacists by allowing them to livestream shootings on their platforms, like the one at Al Noor Mosque in Christchurch, New Zealand. These viral videos and posts then sensationalize the violence and encourage copycats. The NRA has insisted that guns are not to blame, people are, and has lobbied for its agenda to the point that some lawmakers confuse it with the Second Amendment. Finally, we have our own government. The representatives of our democracy, who should be acting in the interests of the people, instead accept bribes and gifts from groups like the NRA, then offer nothing but their “thoughts and prayers” when the next shooting occurs. Of course, it’s only natural for them to fantasize about artificial intelligence taking care of this problem altogether, halting a shooting before it can even occur. But the tech world says we are far from ready, given human biases and imperfect machines.


In the United States, our legislators barely touched the topic of artificial intelligence during the technology’s infancy. Now they are attempting to play catch-up, drafting legislation to regulate algorithms and A.I. as best they can. However, some argue that it’s too little, too late. The slow pace of government bureaucracy has always struggled to keep up with technological innovation, but that doesn’t mean lawmakers should simply give the tech world free rein over the digital empire it is building. One of the issues facing Congress is the conflict between the workforce and A.I.: with machine learning and algorithms, workers are finding it hard to work alongside, or compete with, A.I., as algorithmic bias and automation lead to job loss and displacement. Michael Lotito, co-chair of Littler Mendelson’s Workplace Policy Institute, explains: “Congress is not seriously moving the ball. There is no national call to action.”

An example of lawmakers’ apathy toward regulating A.I. can be seen in the response to the Algorithmic Accountability Act (H.R. 2231, S. 1108), introduced by Sen. Cory Booker (D-N.J.), Sen. Ron Wyden (D-Ore.), and Rep. Yvette Clarke (D-N.Y.) on April 10, 2019. The bill, which could help regulate the many industries using artificial intelligence, data analysis, and advanced algorithms, “would require large companies to audit their algorithms for potential bias and discrimination and to submit impact assessments to Federal Trade Commission officials.” The same month it was written, the bill was referred to the Energy and Commerce Committee, where it still remains. The public views the bill favorably, and it has been endorsed by a number of tech and civil rights groups, including Data for Black Lives, the Center on Privacy and Technology at Georgetown Law, and the National Hispanic Media Coalition. Despite this, it has been given only a 3% chance of being enacted, as predicted by Skopos Labs, a company made up of “artificial intelligence researchers, data scientists, data engineers, software engineers, domain experts, and financial professionals.” Ironically, Skopos Labs makes these predictions about a bill’s chances of enactment using a methodology it calls Automated Predictive Intelligence.
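
To give a sense of what the audits the bill contemplates might involve, here is a minimal sketch of one common check: comparing the rate at which an automated system produces a favorable outcome across groups and reporting the ratio of the lowest to the highest rate, often judged against the informal “four-fifths” rule from employment law. The decision log, threshold, and report fields are assumptions for illustration; the bill itself does not prescribe any particular metric.

```python
# Hypothetical decision log from an automated screening system:
# (applicant_group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(log):
    """Per-group approval rates from a list of (group, approved) records."""
    totals, approved = {}, {}
    for group, ok in log:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {group: approved[group] / totals[group] for group in totals}

def impact_assessment(log, rule_of_thumb=0.8):
    """Tiny impact-assessment summary: group rates plus the ratio of the
    lowest to the highest rate (the 'four-fifths' heuristic)."""
    rates = approval_rates(log)
    ratio = min(rates.values()) / max(rates.values())
    return {
        "approval_rates": rates,
        "disparate_impact_ratio": round(ratio, 2),
        "flagged_for_review": ratio < rule_of_thumb,
    }

print(impact_assessment(decisions))
# {'approval_rates': {'group_a': 0.75, 'group_b': 0.25},
#  'disparate_impact_ratio': 0.33, 'flagged_for_review': True}
```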

So, what is the solution when A.I. can be used for both good and evil? When no one actor is responsible for mass shootings, and no one takes responsibility? When legislators try to act but our institutional bureaucracy fails? It’s a careful balance that lawmakers and society need to find a way to strike. What I can say is that innovation is not a concept reserved for the tech industry. Innovation can exist in our government, too. When good bills and proposals sit in committee without being taken seriously, or groups like the NRA drive our politicians instead of the people’s voice, we know it is time for a change. Let’s innovate our governmental institutions to keep up with the pace of technological change, and then maybe we’ll be equipped to enact the regulations that let humans and technology coexist in synergy.

Haven Miller

Haven is a recent Master of Urban Planning graduate from the NYU Robert F. Wagner Graduate School of Public Service, fascinated by the intersection of technology and government innovation, transportation planning, and planning resilient cities that can withstand climate change. In her graduate studies, Haven concentrated in international development planning and worked as a graduate consultant for the United Nations Capital Development Fund (UNCDF) on her culminating capstone project. For that project, she and a team of her peers studied in depth how to localize the sustainable development goals and what steps must be taken to give subnational governments a seat at the table in the 2030 Agenda. Haven is passionate about decentralized development planning and international cooperation, and believes that the use of advanced technology in our governmental institutions will lead to greater innovation for the global good.
